Search Results: "zugschlus"

22 December 2008

Marc 'Zugschlus' Haber: How to pin lenny?

Dear lazyweb, how do I pin lenny now and have that pin hold after lenny’s release? Is there any method that will get me testing lenny now, stable lenny later, and not testing squeeze?
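A sketch of the codename-based approach (my suggestion, not from the original article; it assumes an apt that understands codenames both in sources.list and in pin stanzas - the n= pin is a newer apt feature, so double-check with apt-cache policy):

# /etc/apt/sources.list: track the codename instead of the suite name
deb http://ftp.de.debian.org/debian/ lenny main

# /etc/apt/preferences: pin the codename, so the pin survives the release
Package: *
Pin: release n=lenny
Pin-Priority: 990

Tracking the codename in sources.list alone already yields “testing lenny now, stable lenny later”; the pin only matters if other suites appear in sources.list as well.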

22 November 2008

Marc 'Zugschlus' Haber: nVidia and current Kernels

My home workplace is slowly and steadily mutating into a never-ending story. I do not remember blogging every aspect of it, but after three graphics cards, an even older mainboard and two DVB-S cards, my home workplace PC currently does what I expect it to do: run Debian unstable, drive two 20 inch DVI TFT monitors with 1600x1200 pixels each, and receive DVB-S transmissions. I do not think that these are exaggerated expectations, but it took over three months to find a combination of hardware which will actually do what I want.

The hardest part was finding an AGP graphics card which can drive two DVI monitors with 1600x1200 pixels each. After failing with two different Matrox cards (the G550 not being able to do 1600x1200 pixels if the monitors are connected via DVI), I finally settled on a used GeForce FX 5200. In the beginning, the binary nVidia module didn’t hurt as much as I expected. Unfortunately, this rapidly changed with the 2.6.27 Linux kernel, which changed its interface in a way that stopped the nvidia-kernel-source 173.14.09 from sid from compiling. Contrary to what is told in #500285, the driver doesn’t compile on my system even when I clean out /usr/src/modules/nvidia-kernel prior to unpacking the driver .tar.gz. I already thought that my problem was solved when I noticed nvidia-kernel-source 177.80 in experimental. The module compiles just fine against kernel 2.6.27, but...
Nov 21 17:30:53 weave kernel: NVRM: The NVIDIA GeForce FX 5200 GPU installed in this system is
Nov 21 17:30:53 weave kernel: NVRM:  supported through the NVIDIA 173.14.xx Legacy drivers. Please
Nov 21 17:30:53 weave kernel: NVRM:  visit http://www.nvidia.com/object/unix.html for more
Nov 21 17:30:53 weave kernel: NVRM:  information.  The 177.80 NVIDIA driver will ignore
Nov 21 17:30:53 weave kernel: NVRM:  this GPU.  Continuing probe...
Nov 21 17:30:53 weave kernel: NVRM: No NVIDIA graphics adapter found!
Don’t closed-source, binary-only drivers just suck? I mean, granted, the FX 5200 is a five-year-old design, but it does its basic office job just fine. So I can currently either stick with a legacy kernel of the 2.6.26 series to be able to run the legacy nVidia driver, or shell out money for a new graphics card plus a new mainboard, a new CPU and new memory (since current graphics cards aren’t available for AGP any more). Just fine. Sucks.

If I get around to buying a new box, which graphics card manufacturer is to be trusted these days? nVidia is out of the question after my current experiences, Intel doesn’t make graphics cards as far as I know, and ATI/AMD? I hear that they have recently seen the light, but are their open source drivers mature enough that ATI/AMD products can already drive a high-resolution dual-DVI setup? I currently do not care about 3D performance, I just want to push around windows on a flat KDE desktop and do some serious work. All I want is screen real estate and a picture of decent quality. I’d appreciate any hints and comments, and I surely hope that there will be some kind of upstream support for the 2.6.26 kernels until I have refitted my home workplace hardware.

31 October 2008

Marc 'Zugschlus' Haber: No password no more

I apologize for inadvertently password-protecting last night’s article about Nagios, parent hosts and traceroute on the Internet. Somehow, my Iceweasel firmly insists on entering my blog’s admin password into every form field labeled “password” on blog.zugschlus.de, and the article entry form happens to have one like that. Last night, I forgot to delete it before publishing. Sorry. Thanks to mika for making me aware of that.

30 October 2008

Marc 'Zugschlus' Haber: Nagios, Parent Hosts, and traceroute on the Internet

Nagios has the - very useful - feature of “parent hosts”. If it deems a host A to be down, it first checks A’s parent host, B, and reports A as down only if B is up. This goes on recursively until a host with state “up” is found, and only the first “down” host is actually reported. This keeps on-call people from being bombed with alerts in case of major network outages and makes sure that the alerts that are actually sent out describe the actual outage reasonably accurately.

As an individual who has some “external” servers in various data centers on the Internet, I would like to not be alerted multiple times that my servers at ISPs C, D, and E are down if there is an outage at the ISP F hosting my Nagios installation, or at one of the various exchange points, temporarily rendering the servers unreachable (without me being able to do anything about it). The solution sounds easy but is surprisingly hard.

How one could solve the issue

For each host in the manual Nagios configuration, do a traceroute and generate host stanzas for each of the hosts found:
$ sudo traceroute -I icmp -n -A 213.239.240.200
traceroute to 213.239.240.200 (213.239.240.200), 64 hops max, 28 byte packets
 1  <snip>
 2  217.243.221.150 [AS3320]  1 ms  0 ms  0 ms
 3  62.154.10.106 [AS3320]  1 ms  2 ms  2 ms
 4  217.239.40.226 [AS3320]  51 ms 217.239.40.234 [AS3320]  3 ms 217.239.40.226 [AS3320]  60 ms
 5  193.159.226.2 [AS3320]  3 ms  3 ms  3 ms
 6  213.239.240.200 [AS24940]  6 ms  7 ms (TOS=238!)  8 ms
$
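Such generated stanzas could look like this (a sketch with hypothetical names; generic-host stands in for whatever host template is locally in use):

define host{
        use             generic-host            ; assumed local template
        host_name       gw-193-159-226-2        ; hop 5 of the traceroute above
        address         193.159.226.2
        parents         gw-217-239-40-226       ; hop 4
        }

define host{
        use             generic-host
        host_name       www-external            ; the actual target host
        address         213.239.240.200
        parents         gw-193-159-226-2
        }

The parents directive takes a comma-separated list of host names, which is presumably the hook for the multiple-parents situation discussed below.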
This sounds easy and surely can be accomplished in a few hundred lines of perl. However, it makes it necessary to parse the Nagios configuration if one wants to obtain the list of configured hosts directly from there. Unfortunately, the syntax of the Nagios configuration is “historically grown”, which makes it quite illogical and awfully hard to parse - this alone would easily double the size of the traceroute-based configuration generator. Additionally, the (real) traceroute given above shows three other issues that might arise from such a setup.

Dynamic Routing

Network paths are not as static as a traceroute suggests. On the Internet, the ISPs use BGP to connect their networks and to ensure connectivity. In case of network reconfiguration (which may be caused by desired technical or policy changes, or by actual failures), the traceroute to a given network host may change. Considerable experience is needed to judge whether a change in the traceroute is only short-term, lasting just while one looks at it, or long-term and meant to stay. Of course, only long-term changes should be reflected in the Nagios configuration. The possibility of changes makes it necessary to distinguish between “manually” and “automatically” generated Nagios host entries and to have a mechanism to quickly re-generate the “automatic” entries according to the current situation found on the network.

Alternative Network Paths

If you look at hop 4 of the traceroute given above, you see two IP addresses. This is a common “one packet left, one packet right” setup where two identical lines share the load, so one sees different interfaces of the next-hop router. Which IP address should one put into the parent host declaration then? Later Nagios versions supposedly are able to handle multiple parents per host, but the docs of course don’t cover this complex configuration, and I didn’t yet find the time to find out how this mechanism works and whether it can be used to represent a network layout like that in Nagios.

Unpingable Routers

For some network operators, ICMP echo request packets (ping) pose a significant load on their network equipment, so they have disabled ICMP echo replies on these machines to keep the load down. Such a router still shows up in a traceroute (which works by sending packets with short TTL values and making them expire in transit, resulting in an ICMP TTL exceeded message being sent back), but it cannot be pinged. Hop number 5 in the traceroute above is such a box: it shows up fine in the traceroute, but pinging it will always return “DOWN”. Currently, I simply omit such hosts from the Nagios configuration, but this may result in extra and inaccurate alerts once the unpingable host itself fails. A different possibility would be to check the host’s availability by sending an appropriately TTLed packet either to the host itself or to a host that is “behind” it. The latter introduces even more complexity, since one needs to find a host “behind” the unpingable box, but this may be necessary: in the case of hop number 5 above, the IP address of the hop does not even seem to be in AS3320’s internal routing, and tracing to the hop itself stops well before the expected place of the host, regardless of from where one traces.

Conclusion

This is a surprisingly hard issue if one wants to do network monitoring while generating accurate alerts. Surely it’s a topic that needs to be addressed in network monitoring, so I am quite interested in how other people tackle this.
Please comment! Thanks in advance.

30 June 2008

Marc 'Zugschlus' Haber: Serial Console Server for the Poor III

This is the third installment of my article about the Serial Console Server for the Poor. First installment here, second installment here. The first part of the article covered the hardware and the udev part creating the device nodes, and the second part explained how to solve the software part using ser2net; this part explains why ser2net was ditched in favor of cereal and how the console server operates with cereal now.

Cereal (which I was pointed to on IRC as a reaction to article II) is a nifty combination of runit’s runsv and screen. One of the less-known features of screen is that it can also connect to a serial port instead of a “real” console. cereal makes use of this feature, putting up with the fact that screen can only set a baud rate. I have not seen a serial application with parameters other than 8n1 (8 data bits, no parity, 1 stop bit) in a decade, so I cannot comment on whether this is a real disadvantage. I guess that it might be possible to set the serial port to the desired set of parameters before starting up screen, but I didn’t actually try it since I do not need it.

Cereal uses a set of configuration parameters and processes called a “session” to associate a serial port with a baud rate and a set of user/group settings controlling who is able to start, stop, attach to or detach from a session. It doesn’t have a “real” configuration file. Instead, it has an admin program, cereal-admin, which takes the parameters associated with a session and writes them to an (intentionally?) undocumented set of config files in /var/lib/cereal. Only a single user is able to attach to a session read/write, while an entire group can attach to a session read-only and can thus access the logs. A session is automatically started when the system boots and can be stopped and started using cereal-admin as well.

Once a session is running, a specially configured screen process runs attached to the serial port, and users simply attach and detach to and from it as one would to a normally running screen. That way, one can scroll back and examine what was shown on the serial port while nobody was attached. This is quite handy when the device connected to the port prints warnings, errors and status information to its serial port. The screen configuration that is used is rather sophisticated: a permanent log file is written, which is also used for read-only access to the serial session, and a very informative status line is configured.

Unfortunately, one of screen’s biggest disadvantages applies here as well: user X cannot attach to a session started by user Y unless Y has access privileges for X’s pty (or vice versa, I never can remember), which can be a security risk. Additionally, it is not possible to su or sudo to X and then attach to the session - one needs to have the original pty opened by the correct user, which usually means a “real” login with the correct account (via /bin/login or sshing in to the box). If one has the necessary privileges, one can use the cereal executable to attach to the screen (which gives read-write access) or to tail -F the log file (giving timestamped read-only access). That way, one hardly notices that one is actually working with an appropriately configured screen that is kept running by runit.

The restrictions coming with cereal suggest a setup similar to what I did with ser2net previously: each session has its own user account, which forces an attach to the appropriately named cereal shell on login:
my-device-name:x:1801:1801:my-device-name,,,:/home/my-device-name:/usr/local/sbin/cerealshell
$ cat /usr/local/sbin/cerealshell
#!/bin/bash
set -eu
cereal attach $(id --user --name)
$
You can see that I decided to name the sessions after the actual device name, allowing the underlying ttyUSB device name to change without a change in the user interface. The actual users and sessions were created with a small shell script:
#!/bin/bash
# commands are echoed for review; pipe the output to sh to actually run them
createconsole() {
  NAME="$1"
  PORT="$2"
  BAUD="$3"
  # derive the UID/GID from the port number: USBserial1 becomes 1801
  MUID="${PORT/USBserial/180}"
  echo sudo addgroup --force-badname --gid "$MUID" "$NAME"
  echo sudo adduser --force-badname --gecos "$NAME,,," --ingroup "$NAME" --uid "$MUID" --disabled-password "$NAME"
  echo sudo adduser "$NAME" dialout
  echo sudo mkdir -p "/home/$NAME/.ssh"
  echo sudo chmod 755 "/home/$NAME/.ssh"
  echo sudo touch "/home/$NAME/.ssh/authorized_keys"
  echo sudo chmod 644 "/home/$NAME/.ssh/authorized_keys"
  echo sudo chown "$NAME:$NAME" "/home/$NAME/.ssh" "/home/$NAME/.ssh/authorized_keys"
  echo sudo cereal-admin create "$NAME" "/dev/$PORT" "$BAUD" "$NAME" "$NAME"
  echo sleep 2
  echo sudo cereal-admin start "$NAME"
  echo echo -----
}

createconsole my-device-1 USBserial1 9600
createconsole my-device-2 USBserial3 115200
Note that the script also uses a hardcoded UID range which might need adaptation to your local needs (and breaks for more than ten serial ports), and that it also takes care of creating an empty authorized_keys file with correct permissions. That way one can simply ssh to my-device-1@consoleserver and find oneself immediately connected to the serial console, seeing the last things that happened on the serial port, and being able to work with the device right away. That’s fine, comfortable and usable. I actually like it better than what I built with ser2net previously.

22 June 2008

Christian Perrier: Console stuff not a secret, just neglected

Marc, from my experience, there is no 'secret' in the console-tools/kbd migration work, just low involvement of all parties. I have watched the work on console handling stuff in Debian for a few years now, and my conclusion is simply that nearly nobody cares about it any more. I do maintain console-data, which provides console fonts and keymaps. I took it over slowly from Alastair McKinstry (the console-tools maintainer mentioned in Marc's blog entry) because that involved maintaining localization of keymap names... which are used and visible in the installer. Ideas to switch the installer to console-setup and related tools are floating around - just not achieved, because of a lack of motivated manpower. Progress was made thanks to Anton Zinoviev's and Colin Watson's work, but the project is currently stuck on a quite critical choice. About console-tools being installed by default: this is the consequence of it being Important while kbd is Extra. I think it should be up to the kbd maintainers to push that change. But this brings us back to the initial remark: indeed, nobody cares about console handling nowadays, particularly when it comes to handling non-English environments. Those who still use the console in Linux environments apparently all do it with US keyboards and an English locale. For sure, kbd is better maintained than console-tools, so, at the least, we should switch to it. Funnily, our installer (even the graphical version) *still* relies on console keymaps during the installation process and, therefore, we rely on mostly unmaintained stuff here (people who think that I maintain console-data are plain wrong: I just keep it surviving... :-))

21 June 2008

Marc 'Zugschlus' Haber: kbd seems to be the way to go

This is just a small reminder (for me and others) that Debian is currently migrating from console-tools back to kbd (yes, back again; those who have been around for a few years will remember). This information is obviously a closely-guarded secret. Console-tools is still Priority: important, and kbd is still Priority: extra. However, kbd seems to be much better maintained (current uploads happening, while console-tools has seen its last maintainer upload two years ago), and unfortunately, neither package description suggests which package is the way to go. And debian-installer still installs console-tools by default. However, a few bugs were filed a year ago by the console-tools maintainer to drop console-tools from depends, as console-tools is going away. So I guess that he knows what he’s doing... Before I get around to adding console-tools back to console-log’s depends (as I almost did accidentally), I’d better blog this to remind people of console-tools going away. Maybe we’ll get the priorities changed just in time for lenny.

5 June 2008

Marc 'Zugschlus' Haber: Serial Console Server for the Poor II

This is the second installment of my article about the Serial Console Server for the Poor. First installment here. The last part of the article covered the hardware and the udev part creating the device nodes; this part addresses the part of the software that connects the user to the device node.

The idea of the software side is to allow users access only to certain serial ports without giving out a shell account on the server itself. Communication between user and console server should be encrypted (which certainly means ssh), and users should be able to authenticate both with a key and with a password. Serial parameters (baud rate, parity, stop bits etc.) should be configured on the console server to keep this possible error source away from the users.

For years, I have been thinking about using UUCP’s cu to access the serial port. This time, it’s not going to be used because it offers a shell escape which cannot be turned off. That would indirectly give users a shell on the server, which is not what I want. Minicom still seems to be the most-used application to access a serial port, but I have never understood the sense of using a terminal emulator inside a terminal emulator, especially if one has to break the terminal emulator’s habit of trying to initialize a non-existent modem. So I settled on the application that I usually use to access a serial port: ser2net. This is a Linux implementation of Cisco’s “reverse telnet” feature, which “connects” a TCP port to a serial port with a given set of parameters. For example,
4016:telnet:600:/dev/USBserial6:38400 8DATABITS NONE 1STOPBIT banner
connects TCP port 4016 with /dev/USBserial6 using a 38400 8n1 parameter set. If you telnet to localhost 4016, you’ll find yourself talking to the device connected to the appropriate serial port.

This moves the security risk to the telnet client, which - unfortunately - offers a shell escape function from the telnet command line, reachable with the escape character. The only way to remove this is the -E command line option, which turns off the escape function completely. This unfortunately also keeps the user from cleanly quitting the telnet session, so the only way to get out of a telnet -E is to have the remote side close the serial port (which ain’t happening on a serial console) or to kill the telnet process, for example by closing the carrying ssh session. That’s a turn-off, but one that one can live with.

Now, the only challenge left is to have the appropriate telnet client started automatically for the user. I decided on tying this to the account being connected to, so that ssh USBserial4@hostname will automatically do the right thing. Since password authentication needs to work (optionally), this precludes the use of the command modifier in an authorized_keys file. Another possibility to force a command to be executed on login is to set it as the user’s login shell in /etc/passwd. One drawback: the command must not have any parameters, so one needs to pass needed parameters to the program some other way. Additionally, the program must be listed in /etc/shells or it will be silently ignored. My /etc/passwd entries look like
USBserial2:x:1002:1002:USBserial2,,,:/home/USBserial2:/usr/local/sbin/serialconnect-shell
with /usr/local/sbin/serialconnect-shell looking like
#!/bin/bash
set -eu
case "$(id --user --name)" in
  USBserial1) PORT="2011";;
  USBserial2) PORT="2012";;
  USBserial3) PORT="6013";;
  USBserial4) PORT="6014";;
  USBserial5) PORT="2015";;
  USBserial6) PORT="2016";;
  USBserial7) PORT="6017";;
  *) echo "called from unknown account, terminating."; exit 1;;
esac
telnet -E localhost $PORT
Just for reference, here are the appropriate entries in /etc/ser2net.conf:
2011:telnet:600:/dev/USBserial1:9600 8DATABITS NONE 1STOPBIT banner
2012:telnet:600:/dev/USBserial2:9600 8DATABITS NONE 1STOPBIT banner
6013:telnet:600:/dev/USBserial3:115200 8DATABITS NONE 1STOPBIT banner
6014:telnet:600:/dev/USBserial4:115200 8DATABITS NONE 1STOPBIT banner
2015:telnet:600:/dev/USBserial5:9600 8DATABITS NONE 1STOPBIT banner
2016:telnet:600:/dev/USBserial6:9600 8DATABITS NONE 1STOPBIT banner
6017:telnet:600:/dev/USBserial7:115200 8DATABITS NONE 1STOPBIT banner
If one wants to authenticate for a given serial port with an ssh key, all one needs to do is put the appropriate key into the authorized_keys file of the appropriate account, and one is all set. I do sincerely hope that I didn’t put any blatant security goofs into this setup, but if I did, you’ll tell me in the comments, right? Just one more challenge for my valued readers: the port numbers in my ser2net.conf have a system. Can you explain it to me?

3 June 2008

Marc 'Zugschlus' Haber: Serial Console Server for the Poor I

The serial port is still the way to access network components out of band. It is slow, but reliable, and remarkably well standardized. It does not have technical whiz-bangs that can fail when one needs things to just work. That makes it the natural way to access critical infrastructure and still be sure that this access vector works when most other things are down. Every communication link has two sides, so there is a market for devices with a network link and a larger number of serial ports to connect the actual devices to. Commercial vendors offer a broad choice of serial console servers. Most of them, especially the small products with five to ten ports, are quite expensive, so I have been investigating how to build a serial console server from el cheapo hardware. USB comes to mind here, of course.
[Image: the fully equipped USB hub]
The hardware side is actually quite easy: a seven-port USB hub and seven USB-to-serial adaptors (using the widely-deployed Prolific PL2303) are easily purchased for well below a hundred Euros, and naively connected. In my test setup, connected to the hp compaq nc8000 that my blog readers should know only too well by now, the hub does not even need to have its power supply connected; the notebook can power the hub and the seven adaptors just fine.

When the adaptors are plugged in, the ttyUSBx device nodes appear in the order of plugging in. When the system is booted, the order is not very predictable, so one needs - again - udev to give predictable device node names. udevinfo is an important tool here: it shows the data set of a USB device which is available to match a device and to act appropriately when it is connected (an example invocation is shown after the rules below). udevinfo sorts the devices into a tree and shows the relationship of the devices - which device is “behind” which other device from the system’s point of view. udevinfo’s output starts at the leaf of the device tree and shows the information for each node up the tree until its root is reached.

One pitfall is that one can only match against the leaf and _one_ other node further up the tree. So you cannot do matches like “the pl2303-based device connected to port 3 on hub 2-3 will be designated USBserial3”. If you try this, your rule will silently be ignored. This is a major turn-off in udev’s implementation. I didn’t try it on sid, but etch definitely shows this behavior. So one can only match like “the tty on the USB subsystem connected to port 3 on hub 2-3 will be designated USBserial3”, but that’s probably as good as it’ll get.

The reason my hub has only seven ports clearly shows itself when one looks at the udevinfo output of the fully equipped USB hub: the adapters are connected to ports 1, 2, 3, 4.1, 4.2, 4.3 and 4.4. The seven-port hub is actually two four-port hubs in a single case, with the second one connected to the first one’s port 4. This reflects itself, of course, in the udev rules necessary to address the adapters by the hub port they’re plugged into:
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.1", \
   SYMLINK="USBserial1"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.2", \
   SYMLINK="USBserial2"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.3", \
   SYMLINK="USBserial3"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.4.1", \
   SYMLINK="USBserial4"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.4.2", \
   SYMLINK="USBserial5"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.4.3", \
   SYMLINK="USBserial6"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="usb", \
   KERNELS=="*-*.4.4", \
   SYMLINK="USBserial7"
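To find the KERNELS values to match on for one’s own hub topology, one can dump the whole attribute chain of an adaptor’s device node - a sketch, using etch-era udev (later versions replace udevinfo with udevadm info):

# show every node up the device tree, with the attributes usable in rules
udevinfo -a -p $(udevinfo -q path -n /dev/ttyUSB0)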
These rules will only work for a similarly structured hub that is directly connected to the notebook. If there are more hubs between the “final” hub and the host, the addresses matched in the KERNELS matches are going to have a different structure. In the test setup, the first two parts of the address needed to be wildcarded, since these addresses keep changing when the host is booted or the hub is plugged out and in. So I guess that these udev rules are probably going to fail miserably when more hubs and/or USB-to-serial adaptors are connected. But if one does not make too many changes, things are just fine, and it is possible to put numbers on the USB hub ports and be sure that the USB-to-serial adaptor connected to port 4 is going to be /dev/USBserial4 in the host system. I tried plugging the adaptors in and out in random order, unplugged and replugged the hub itself, booted the host, and the relationship between port and device node stayed stable. In the test environment, I only tested one adapter at a time, but all these tests were successful as well.

I guess that many more of these adaptors will eventually need a powered USB hub, but with seven adaptors on a single hub, the power from the USB host is sufficient to live without extra power on the hub. And, I guess, there is some hard limit for the number of USB adapters that can be used, and some soft limit beyond which managing the ports and adapters is going to get harder. But I think that a site in need of more than, say, sixteen serial ports on the console server can afford a “real” console server from one of the major vendors, which can be accessed via ethernet, UMTS, ISDN and whatever you can think of.

This concludes the hardware part of my serial console server for the poor. Tomorrow, I’ll blog about the software side needed on the host to allow the serial ports to be accessed from the network while still being reasonably secure. The next article will involve ssh, /etc/passwd shells, /etc/shells, a small perl script, ser2net and telnet. And “shell escape” is the cue word for grey hairs sprouting on my head.

Marc 'Zugschlus' Haber: Works with a more recent card as well

Today, I had the opportunity to try the UMTS initialization mechanism that I built this weekend with more recent hardware, a newer Option Globetrotter 3G Express Card with Vodafone branding (reporting itself as a “Globetrotter HSDPA Modem” with vendor ID 0xaf0 and product ID 0x6701). To get the card connected to my test notebook, a hp compaq nc8000, I had an “Expresscard in a PC card slot” adapter and a passive “Expresscard at a normal USB port” adapter. The USB adapter cost about ten Euros, and I don’t imagine the PC card adapter to be much more expensive. The newer Option card is recognized by the Option kernel driver (CONFIG_USB_SERIAL_OPTION) as three USB-connected serial ports, just like the older one, and it works just fine. The only difference is that the udev rule needs a different value for the ATTRS{modalias} setting. My umts-pin script complains about an “illegal seek” after entering the PIN, but the card registers itself with the network nevertheless. Both gammu to send SMS and pppd for IP connectivity worked out of the box. Especially the “Expresscard-to-USB” adapter setup is very sexy for setups where neither an Expresscard nor a PC card slot is available, and USB cables can be nice and long. So one could even mount the UMTS interface near the window and pull a longer USB cable to the actual system. I decided on putting the UMTS interface into a notebook, though, and mounting the notebook near the window to be able to monitor the monitoring systems and send out SMS even when the whole datacenter is without power (by virtue of the notebook having a built-in UPS).

1 June 2008

Marc 'Zugschlus' Haber: Automatic initialization of an Option 3G Datacard

For mobile UMTS/GSM, I have been using an Option 3G Data Card for two and a half years now. I blogged about getting the card to work on Linux (in German, sorry) in July 2005. Until now, I never found the time to automate the card initialization, so I had been using a horrible chat script to initialize the card when the PPP connection was set up. I recently took the time to automate this, so that the PIN is transmitted to the card automatically when it is plugged in. This article documents what I did.

On a side note: unfortunately, the vendors’ attitude towards Linux hasn’t changed since 2005. Their hotlines still deny that their products can be used with Linux at all, and they surely do not publish any documentation that could be of help. On the other hand, Vodafone has published software that supposedly aids usage of their products under Linux. I haven’t tried it yet since it is not yet packaged for Debian. Additionally, Vodafone support, media and sales do not seem to know about this effort; they still deny that their products work with Linux. Windows users happily install proprietary software products that do little more than send a handful of AT commands to the emulated USB modem and hand the connection over to Windows’ PPP stack. A very unsatisfying situation. Just for the record: dear Vodafone DE, a week ago you missed the sale of a new USB UMTS interface because you don’t even document it on Linux. This motivated me to look into the drawer that holds the old, non-HSDPA PC cards that have been decommissioned at the customers’ site and use an old, used device. Your fault.

But now back on topic: for a data center monitoring system, I want to send out alerts as text messages (for German speakers: SMS - we have a rather strange vocabulary for mobile communications), and I am reluctant to do so via IP: IP is one of the things that can fail, so it is preferable to directly inject the text messages into the network of the operator that provides service to the recipients of the text messages. Fortunately, most of the recipients are in the same network. After evaluating current hardware solutions that could be plugged directly into the “real” server doing the monitoring, and running into the wall of non-support provided by the network operators and hardware vendors, I decided on taking things a little further and building an Ethernet-connected text message device. An old hp compaq nc8000 (no, not mine, mine still works fine and is in daily use) and an old, first-generation Option Datacard 3G will be mounted near the datacenter’s windows and equipped with Debian and Nagios. That way, the text message device can itself monitor the monitoring system and even send out a warning message when the data center loses power: it has a built-in UPS.

For the text messages to go out reliably, it is necessary that the UMTS card is initialized automatically when the system comes up. A great opportunity to finally address this issue for Linux, at the customers’ expense **grins**. UMTS card initialization is done in two steps. First, when the card is plugged in, the virtual OHCI host port appears. Initialization of the virtual OHCI happens automatically. Most documentation on the web says that you now need to load an appropriately parametrized usbserial module, and I did it this way for years, but that’s not necessary any more: CONFIG_USB_SERIAL_OPTION is a dedicated kernel module for the Option 3G data card, which gets loaded automatically if it is available.
If you don’t have the Option module for some reason, you still need the following udev rules to automatically load and unload the generic USB serial module. I figured that out yesterday, before I learned about the Option module from a comment made on the original version of this article.
SUBSYSTEM=="usb", \
   SYSFS{idProduct}=="5000", SYSFS{idVendor}=="0af0", \
   ACTION=="add", \
   RUN+="/sbin/modprobe usbserial vendor=0x$attr{idVendor} product=0x$attr{idProduct}"
SUBSYSTEM=="usb", \
   SYSFS{idProduct}=="5000", SYSFS{idVendor}=="0af0", \
   ACTION=="remove", \
   RUN+="/sbin/modprobe -r usbserial"
Unfortunately, the usbserial and option modules stay around after the card is pulled. This doesn’t hurt, though. Next, we see three virtual serial interfaces, which we can detect via udev to assign “speaking” device names and to transmit the PIN. This goes into /etc/udev/rules.d/50-option3g.rules:
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="option", \
   ATTRS{bInterfaceNumber}=="00", \
   ATTRS{modalias}=="usb:v0AF0p5000d0000dc00dsc00dp00icFFiscFFipFF", \
   SYMLINK="UMTS-DATA"
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", DRIVERS=="option", \
   ATTRS{bInterfaceNumber}=="02", \
   ATTRS{modalias}=="usb:v0AF0p5000d0000dc00dsc00dp00icFFiscFFipFF", \
   SYMLINK="UMTS-CONTROL", RUN+="/usr/local/sbin/umts-pin --device %k"
I am not sure what the modalias attribute identifies, but it looks like it identifies the hardware model: an identical Option Datacard 3G plugged into the other PC card slot gets properly initialized as well. These two udev stanzas do two different things. First, they make sure that the new devices are symlinked to /dev/UMTS-DATA and /dev/UMTS-CONTROL, suggesting the intended use of the interfaces. The second virtual interface of the 3G Data Card does not seem to be usable at all, and you cannot build an actual data connection over the third. So we baptize the first interface /dev/UMTS-DATA and ignore the second; the third gets called /dev/UMTS-CONTROL.

Now to the real magic: /usr/local/sbin/umts-pin is a perl script that sends the PIN to the card (a much-simplified shell sketch of the idea appears at the end of this article). %k gets expanded to the device name of the device that has just been detected, so umts-pin gets called with “--device ttyUSB2”. It makes use of Cosimo Streppone’s Device::Modem module (Device::Gsm is not yet packaged for Debian and might be of better use here) to talk to the UMTS device. Unfortunately, I have not yet fully understood the logging and answer processing features of Device::Modem, so the implementation might look a little clumsy. I hope that it can be clearly understood anyway. The actual PIN is read from a configuration file in an apache-like format, which is by default looked for in /etc/umts/pin.conf:
<pin>
        <default>
                pin xyzw
        </default>
</pin>
The reason I settled on this rather complex configuration file format is that the ultimate goal is to automatically detect which of my SIMs is currently inserted in the UMTS card and to automatically send the correct PIN. I haven’t been successful in doing so, since I haven’t yet found an AT command to get the UMTS card to divulge the SIM serial number or the IMSI before the PIN is transmitted to the SIM. So currently, the only things that are supported are the “default” stanza and an identically formatted “override” stanza which takes precedence over the default stanza if present.

If the first attempt to send the PIN fails, the script tries again ten seconds later. On my test system, this happens about once in twenty times. That must be some weird timing issue. So, if you have the wrong PIN configured, you’ll need the PUK after plugging in the card for the second time, since the first try will eat two of your three attempts. Beware. If you feel uncomfortable with all this scripting and the “quality” of my code, you can also use gammu’s entersecuritycode function. I discovered this after my PIN script was already written, and gammu currently forces you to expose the PIN on the command line (see #484102), and of course my script’s PIN handling is vastly superior **grins**.

And one final note: more current hardware, most notably the Huawei series of cards, is reported to show itself as virtual USB storage first, to allow identification and installation of Windows drivers, and to need an explicit command to show its virtual USB serial side. I have read web pages where people have managed to do this in Linux, but since I use an older card and do not have a current card available (again, Vodafone, your fault!) I cannot comment on this. But there are reports that it can be done, so don’t panic if you have more recent hardware.
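As promised above, a deliberately naive shell stand-in for umts-pin, just to make the mechanics concrete - hypothetical code with no response checking, whereas the real script uses Device::Modem, reads the PIN from /etc/umts/pin.conf and retries on failure:

#!/bin/bash
# called by udev as: umts-pin --device ttyUSB2
set -eu
DEV="/dev/$2"
PIN="xyzw"                                # the real script reads this from /etc/umts/pin.conf
stty -F "$DEV" raw -echo                  # put the port into a sane state
printf 'AT+CPIN="%s"\r' "$PIN" > "$DEV"   # AT+CPIN is the standard PIN-entry command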

31 March 2008

Marc 'Zugschlus' Haber: Does Debian need the local host name in /etc/hosts for IPv6?

Exim has the habit of trying to find out about its host names and IP addresses when it starts up. This has, in the past, been an issue for the Debian packages, since a Debian system might be on a dial-on-demand modem line with high costs and thus should not do unnecessary DNS lookups when the MTA is started. This article tries to describe the issue and the countermeasures Debian took, and asks for tips on how to solve this in the case of IPv6, where our past measures unfortunately do not directly apply. I’d like to solicit opinions from people who are more experienced than me with Unix, the local resolver library including /etc/hosts and /etc/nsswitch.conf, DNS, and - especially - the customs that apply on a system running IPv6.

To avoid the extra DNS lookups, the Exim packages have a debconf option to configure exim for “minimal DNS usage”, which hardcodes the hostname into Exim’s configuration at package configuration time. This was necessary since - without this option - exim looks up its own host name in the DNS even when a completely local operation is invoked. In some cases, exim still looks up its IP address when a listening daemon starts up. This is why the Debian installer configures 127.0.1.1 (not 127.0.0.1) for the local hostname on installation, yielding /etc/hosts files like
127.0.0.1       localhost
127.0.1.1       myfoo.localdomain   myfoo
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
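With such a file in place, what the resolver returns for the local host name can be checked without any DNS traffic - a sketch; the ahostsv6 database is not available in every glibc version:

$ getent hosts myfoo        # the IPv4 path: should print 127.0.1.1
$ getent ahostsv6 myfoo     # roughly what an AF_INET6 lookup sees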
However, in the last few weeks I have heard of a few cases where exim does IPv6 AAAA lookups when a listening daemon starts up. An strace shows a gethostbyname2 call for AF_INET6, and if we want to continue along the lines of what we did in the past, we’d need an IPv6 address for myfoo.localdomain in /etc/hosts as well. I am now wondering how this could be implemented. In IPv4, we have 127.0.0.0/8 available for the local host and could arbitrarily choose 127.0.1.1 to configure the local host name on. In IPv6, there is only ::1, which is a single address. Would it be possible to choose an arbitrary “link local” address on lo, the loopback interface? Or is there any better way?

This being said, I consider the entire 127.0.1.1 business a horrible hack and one of the most ugly things I have ever seen. Do we have a chance to implement this in a cleaner way, or is it still the way to go for the distribution, where we know zilch about the environment in which an installed system is going to be used? This issue leads to people adding their local host name to ::1 in /etc/hosts, which might re-introduce issues that we experienced in the phase when we did the same for 127.0.0.1 (before eventually ending up with 127.0.1.1), or to disabling IPv6 altogether, which is a bad thing at a time when IPv6 should be enabled, not disabled. So I’d like to find a clean solution which could then be implemented in whatever part of Debian might be responsible.

I tried asking this question in other places, including Usenet, before pestering my blog to ask the lazyweb, but obviously the people I asked before do not care for the special environment that a Linux distribution has to take care of. The only answers I got were like “that would be the local administrator’s task to fix” and “this should be taken care of in the local DNS server/setup (maybe even on the local box being installed)”. A quite frustrating experience.

19 March 2008

Marc 'Zugschlus' Haber: Universal boot stick for Debian, grml and the Debian installer

For various reasons, I keep the kernel and the initrd that my notebook needs to boot Linux on a USB stick. I recently added the Debian installer and grml to the stick to allow additional uses.

Usually, USB sticks are formatted with FAT32, and when they are used to boot Linux, syslinux or some derivative is used. I do it differently on my boot stick, most prominently because I’d like to have Unixoid privileges on the stick, and because I’d like to continue using grub. So my stick is formatted as ext3 and feels like an additional hard disk. Booting from a USB stick is sometimes tricky. I have had the best results with formatting the USB stick as a USB ZIP device, which basically means having 64 heads, 32 sectors and a single partition in the fourth primary partition slot. This can be done using mkdiskimage (an example invocation is at the end of this article) and is documented on Debian in /usr/share/doc/syslinux/usbkey.doc. Partitioning and generation of the file system are as straightforward as installing grub using the standard, documented methods for hard disks. I tried using grub 2, but have found it not sufficiently well-documented and stable to be used in this kind of critical environment. I am therefore still using grub-legacy, despite the fact that the grub maintainers ceased fixing grub-legacy bugs some months ago, referring to the pending release of grub 2.

I use unison to keep /boot on the stick in sync with /boot of the Linux system, which is on an encrypted filesystem on the hard disk and thus unusable until the system has booted. The hard disk /boot thus serves only as installation target and sync source, so that the stick does not need to be plugged in during upgrades. update-grub is configured so that running it creates a menu.lst which can be used from the stick. This concludes booting the production system from the stick. For the other systems, I have decided not to put them into the “default” boot menu from menu.lst, but instead to put foo.lst files in the root of the stick; when I need them, I shell out from the grub menu to a command line by pressing “c” and then pull in the new grub config file with the “configfile /foo.lst” command.

grml is my favorite rescue system. I use it all the time once a system starts acting up. One of its biggest advantages is its extremely flexible bootup process. That made it predestined to be the rescue system on my USB boot stick. grml is currently making a transition to a new build environment, which has had the side effect that the boot process has changed in some details. grml-small, the 60 MB variant that I had been using before, has been discontinued and replaced by grml-medium, a 170 MB variant which definitely does not fit on a business card CD any more; but 128 MB USB sticks have become relatively rare anyway. For both grml flavours, all one needs to do is mount the .iso file loopback, copy the file’s contents to a subdirectory of the boot stick, and drop this menu file onto the stick:

title           grml-small
root            (hd0,3)
kernel          /grml-small/boot/isolinux/linux26 toram grml_dir=/grml-small/GRML/ grml_name=GRML ramdisk_size=100000 init=/etc/init lang=us BOOT_IMAGE=grml
initrd          /grml-small/boot/isolinux/minirt26.gz

title           grml-medium
root            (hd0,3)
kernel          /grml-medium/boot/grmlmedium/linux26 live-media-path=/grml-medium/live/ toram=grml-medium.squashfs lang=us boot=live noeject noprompt keyboard=de
initrd          /grml-medium/boot/grmlmedium/initrd.gz

Since grml-medium works reasonably well, I do not expect to need grml-small very often any more. The Debian installer guys have published an initrd which can browse nearly arbitrary media for a debian-installer .iso file, mount it and install from that image. I have made use of that feature to allow Debian to be installed from my boot stick. /d-i holds a current debian-testing-i386-netinst.iso, plus vmlinuz and initrd.gz from http://ftp.nl.debian.org/debian/dists/testing/main/installer-i386/current/images/hd-media/. With that, this d-i.lst:

title debian installer
root (hd0,3)
kernel /d-i/vmlinuz
initrd /d-i/initrd.gz

does the trick and boots the installer, ready to start. I guess that it would be easy to include these grub config stanzas in the default menu.lst, putting grml and the installer into the main grub menu, but I deliberately decided not to have these options directly in the menu for the time being. It is always amazing how flexible today’s systems have become when it comes to booting Linux. For both rescue and installation this is needed, and the respective authors have done a remarkable job.
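For reference, the USB-ZIP formatting mentioned at the beginning of this article boils down to a single command - a sketch; /dev/sdb is an example device name, and the command overwrites the whole stick:

# -4 puts the partition into the fourth primary slot (USB-ZIP convention);
# 0 cylinders means auto-size, with 64 heads and 32 sectors
mkdiskimage -4 /dev/sdb 0 64 32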

22 February 2008

Marc 'Zugschlus' Haber: Bounce from BTS to BTS

Sometimes a bug report is a labyrinth. #348046 is an example of this. It is a horrible mess of at least three different issues, with half of the original participants having become unresponsive. I would like to pull the issues apart into different bug reports to be able to deal with them (and their probably unresponsive submitters) individually. Obviously, cloning and renaming is not an option since this copies the mess. So it would probably be desirable to download the bug mbox and to bounce individual messages to new bugs (created beforehand), but the BTS recognizes the dupes and bins them. Blars has helped me by looking at the BTS mail log, so it became clear that removing the X-Loop, X-Debian-PR, X-Spam, Resent- and Received headers from the mbox before loading it into mutt to do the actual bouncing works fine. A command line to do this:
rm -f mboxout; <mboxbug formail -d -I Received -I X-Debian-PR -I X-Loop -I X-Spam -I Resent -s >> mboxout
One thing learned from trying this in “production”: one needs to send only one message per BTS pulse, or the order of the messages will be totally messed up. That’s a real pity and an annoyance.

21 February 2008

Marc 'Zugschlus' Haber: Debian Installer from an USB stick

For various reasons, I usually carry a USB stick with me that holds a single ext2 filesystem and has grub installed. This blog entry quickly documents how to copy a Debian installer onto it, to be able to quickly install Debian without the need to burn a CD. I had tried this stunt last year already, but without success. I don’t know whether the Debian installer was not able to boot from a USB stick back then or whether I was just too stupid to do things right. This procedure was done with today’s d-i daily image, so I guess that it will work this way in the future.

I’d like the USB stick to be usable for other things even with d-i available on it. So I would have liked d-i to be confined to a single subdirectory, which is possible with today’s code. The installation manual is rather verbose about booting from a USB stick. The only challenge is to find the kernel and initrd files that are needed to actually boot. The exact path to these two files is not in the manual, but the nice people on #debian-boot confirmed that these are the correct ones. The second challenge was a non-issue for me since grub was already installed and functional on the USB stick (its installation was also surprisingly straightforward).

Next step: create a directory on the USB stick and copy vmlinuz, initrd.gz and an arbitrary Debian CD image there. I used the daily netinst image. Then, from the grub command line:

root (hd0,3)
kernel /d-i/vmlinuz expert
initrd /d-i/initrd.gz
boot

That’s it. The installer boots, finds the CD image, mounts it and pulls its components from there. The rest is a normal, stock Debian installation. Debian Installer rocks. It nearly always does the right thing.

3 January 2008

Marc 'Zugschlus' Haber: Late Happy Birthday, #405040

Sometimes, it is nearly as frustrating to use Debian as it is to use commercial software. For example, when one sees a simple bug go completely without reaction for more than a year. #405040 has passed its first anniversary of being reported and last touched. Visible reaction of the package maintainer: nil. It’s a small thing, but an annoying one. And I still consider it unacceptable to let bugs rot for a year without the slightest trace of action.

17 November 2007

Marc 'Zugschlus' Haber: Beware of 2.6.23.x kernel on systems that were installed a long time ago

In a nutshell: if your system is older than sarge (as in installation date; updates done in the meantime don’t matter), beware of 2.6.23.x, or update your grub boot sector, which Debian doesn’t do automatically on package installation.

After 2.6.23.1 had lasted for more than a month, I decided on Wednesday to finally roll it out to most systems - only to be overrun by seven releases in a row (which even kernel.org’s infrastructure did not handle properly). OK, so it’s 2.6.23.8. Installation of the kernel on my test and semi-productive systems went just fine, so I took the opportunity of being in the datacenter for some unrelated matter on a Saturday to update two productive boxes. Just to find them not coming up again. The first box, not connected to a KVM switch but only to a serial console, looked like it simply died at the place where grub usually says “Loading Linux...”, while the second box, on a “real” KVM switch, was a little more helpful, saying “No setup signature found” (or something along those lines). Since there was no internet on site with the boxes down (one being the main router, the second one being the full-service DNS resolver), I quickly chose the fall-back kernel and got back in business. Googling revealed that this is an issue with old grub and kernel 2.6.23, and indeed, running grub-setup brought both systems back online. Had I done this - as usual - remotely, this would have been an unexpected downtime of at least half an hour.

Since all systems in question have the same version of grub installed (the one from etch), I proceeded to investigate what had gone wrong here. Finding that grub does not have any maintainer scripts at all, it would have been my responsibility to update the grub code in the boot sectors to the new version after updating to etch. The package didn’t bother doing this, and this is also not documented in README.Debian or NEWS.Debian. As this is a major pitfall for systems that were installed pre-sarge, and since updating the distribution does not solve the issue, I filed bug #451701. I hope that this blog entry saves other people from unexpected downtimes. Ah, yes, and barfing the error message onto an unused console is a bad foul of grub’s, see #451710

4 November 2007

Marc 'Zugschlus' Haber: exim4 vs. OpenSSL vs. GnuTLS

Judging from the long list of exim4 bugs, especially #446036, I find myself between a rock and a hard place, having to choose between staying with GnuTLS and accepting a probably continuing flow of technical issues, or moving over to OpenSSL, setting an example against GNU software, and probably generating a new flow of license issues.

GnuTLS is the “clearer” solution from a license point of view: the client library is LGPL, and it can thus be safely linked to everything that is part of Debian main. This is the main reason why we decided to use GnuTLS for exim4 years ago, so that we would not need to worry about licensing issues if some other library gets linked into exim some time in the future. On the other hand, my impression gets stronger and stronger that GnuTLS is not ready for prime time. There is a truckload of interoperability problems with a number of “remote sides”. The most prominent issues are TLS aborts when exim4 is the SMTP server for certain clients, most notably Incredimail and some mobile phones from Nokia and other vendors. Most of them can be nailed down to misnegotiation of certain ciphers, but exim currently does not allow disabling some ciphers at run time, and - even worse - one would need to disable them completely, since one cannot judge in advance whether we are facing a client with issues or one without. The GnuTLS maintainers, both Debian’s and upstream, try to be helpful, but I do not see a long-term solution here. Additionally, nobody upstream knows their way around the GnuTLS-related code in exim, which was contributed by Nikos Mavroyanopoulos years ago. So there is little chance to get GnuTLS-related bugs ironed out of exim.

OpenSSL, on the other hand, does not have these interoperability issues. At least, I haven’t heard of any, and the recommended fix for the reporters of GnuTLS-related bugs, “recompile exim with OpenSSL” (which the packaging has been supporting for nearly two years now), usually fixes their problems. I am not in a position to judge whether this is caused by people actually testing against OpenSSL, or by OpenSSL generally being more tolerant towards strange implementations, or by GnuTLS being buggy or poorly written. However, building exim4 against OpenSSL may pose the license issues that convinced us to use GnuTLS in the first place. Exim itself has an OpenSSL exception, and from what I have been told, the MySQL FLOSS License Exception allows linkage to OpenSSL as well. At least, it does now; historical evidence in the exim4 package either shows that it used to be a license violation to link MySQL and OpenSSL in the past, or that we (the Debian exim4 team) wrongly thought so. On the other hand, if it is illegal to have OpenSSL and “really” GPLed code in the same execution space, we already have that license violation today, brought to us by PostgreSQL, which links against OpenSSL.

ftpmaster has already indicated that they won’t consider an exim4 linked against OpenSSL a license violation and that the packages would go through, and the packaging can handle the change by virtue of flipping a switch and rebuilding, but I am still not fully convinced that doing so is the right thing from a political and license point of view. I might still need that final nudge from feedback to this blog article, or have my reluctance fed. Please comment. No, I do not plan to take this to debian-legal; I’d prefer exim4 to stay in Debian main.

31 October 2007

Marc 'Zugschlus' Haber: Concurrently playing sounds still an issue in 2007?

Dear lazyweb, which burning hoops do I need to jump through to be able to listen to music played by Amarok without having to disable the KDE sound system in the Control Panel first? If I don’t, Amarok complains that it cannot initialize any sound driver. The sound interface is the onboard Intel AC97 stuff of the notebook, KDE is set to play through ALSA, Amarok plays through xine which is set up to play through ALSA, and two alsaplayers happily play concurrently. Any ideas?

16 October 2007

Marc 'Zugschlus' Haber: A thousand things I never wanted to know about X

In the last few days, I have replaced the two 20 inch CRT monitors that I have hardly used in the last years with two 20 inch TFT displays, and my company (finally) gave me a 19 inch TFT display to accompany my notebook display at work. Maybe I should take that as a hint that they want to see me in the office more frequently rather than in my home office, which I generously use these days. At home, I built a “new” computer from mainly used parts to drive the two 20 inchers. I have learned a lot about X in the last days, but spent too much time with it.

I have been running my desktop with older X and two CRTs for quite some time, but I hardly ever used it due to the setup of my now ex-flat. Back then, I learned that the “two monitors, one display, one screen, one desktop” option of X was called Xinerama and was statically configured: you write down your display layout in xorg.conf and X uses it. Configuration changes mean restarting the X server. KDE notices Xinerama automatically, designates one monitor the “main monitor” and displays its panel there. You can move windows from one monitor to the other, but changing virtual desktops always changes what both monitors display. This is not really what I want, since I often have “static” contents (news reader, mail reader, chat windows, help screens etc.) on one monitor while I do my normal work on the other monitor, which involves changing virtual desktops.

So I was rather happy when I learned, upon the arrival of the external TFT display for the notebook, that you can work without Xinerama. In that case, the X server comes up with one display and two screens, which KDE uses to display two independent desktops which can be independently configured, but can both be accessed with one mouse and one keyboard. For me, this means that I can finally change virtual desktops independently on both monitors, but pay the price that I cannot display a certain virtual desktop on monitor 1 now and on monitor 2 in five minutes, and that a window opened on monitor 1 is never going to be shown on monitor 2 without serious interference. This looked OK to me. Additionally, you can use different default font sizes on the different monitors, which comes in handy if both monitors are blatantly different in size (which is the case with the notebook and its external monitor). However, with that setup, KDE seems to be challenged when it comes to saving the configuration. Some things (such as desktop background and panel config) seem to save fine, independently per desktop; other things (such as open windows and their positions) are lost when the session exits. My daily visit to Kevin&Kell (which used to be in a browser window that was saved with my session) has badly suffered since then.

Then, a few weeks ago, somebody talked me into trying the X server from experimental. Which is when my dual-desktop setup ceased working. The nice people on #debian-x told me that Xinerama is a thing of the past and is currently in the process of being replaced by Xrandr 1.2. The removal of “my” two-monitor, two-screen, two-desktop setup is collateral damage, and if I still want that behavior, the X.org upstream considers this an issue to be handled by the window manager / desktop environment now. Gee, thanks. That’s what I call a regression. After spending half a day cursing and ranting, and even considering going back to lenny’s (or even etch’s) X.org, I finally settled on going with the “new” way, as I was going to lose my independent desktops sooner or later anyway.
Again with the friendly help of the people on #debian-x, I found out about a lot of advantages of the latest X servers. In most basic cases, the latest X server does not need a configuration file any more. It autodetects your hardware and chooses whatever modes it finds optimal, and it does a pretty decent job of it. If one wants to influence how the X system comes up, one can always write an xorg.conf file - the configfile-less server even writes the “virtual config file” it uses to the log as a starting point. Additionally, almost everything regarding screen layout can be reconfigured at run time using the xrandr binary from a command line. Even a hardware change (such as a new monitor plugged into the powered-on notebook) is correctly detected. Cute. This allows me to reconfigure my notebook depending on where it currently is and which external monitor (or projector) is connected. This could be perfected if I found out whether X offers a hook for a reconfiguration script that is called when the X server starts or when the system wakes up from standby or hibernation with the X server running. A lot to do, and great potential for elegant stuff.

However, while I am talking about hibernation, this is a downside of the new driver: when the system wakes up from hibernation, the X server sometimes confuses itself enough to become nearly unusable. I had it become psychotic when plugging in the external monitor (which can do 1600x1200 pixels) after waking the system up from hibernation at the office. The internal display can only do 1400x1050, and the box thought that the external monitor only had 1050 y-axis pixels, displaying garbage in the lower part of the big monitor and showing a usable but unintelligible panel. When opening a browser or any other application with considerable amounts of text in the window, all fonts become unreadable once one scrolls down. Restarting the X server helps, and everything is fine - but I do not use hibernation just to be forced to log out of X after waking up. After going back to “one monitor mode”, the X server now thinks that my virtual desktop is 1600x1200 and only shows 1400x1050, with dead space at the right and lower sides of the virtual desktop; xrandr does not allow me to resize the framebuffer below 1600x1200. No idea what’s going on here. In most cases, calling xrandr --auto for each output will fix the issues at run time, but sometimes a session restart is needed. The code needs to stabilize quite a bit before becoming usable in production.

There are a few disadvantages that will remain: both monitors will change their contents when the virtual desktop is changed, and both displays will use the same default font size (which might be inappropriate if the displays are of different size). But I think, once the issues are cleared up, that I can live with these disadvantages. The “new” desktop system that I built has a Matrox G450 dual head graphics card, and unfortunately, the Xrandr-1.2-enabled X server from experimental has shown itself to be unusable there: upon some mode changes, the X server goes into an endless loop and proves to be unkillable short of SIGKILL. And if it dies, it sometimes pulls the entire system down with it. There is a corresponding bug in the upstream BTS, but it hasn’t received any noticeable attention from the upstream developers. So I might find myself configuring one last Xinerama config before I can finally migrate completely to Xrandr 1.2 on the desktop as well.
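For reference, the run-time reconfiguration mentioned above boils down to xrandr calls like these (output names vary by driver; LVDS and VGA are examples):

$ xrandr --output VGA --auto --right-of LVDS   # bring up the external monitor
$ xrandr --output VGA --off                    # back to one-monitor mode
$ xrandr --output LVDS --auto                  # re-probe a confused output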
Then I’ll probably need to bug the KDE upstream people to implement a two-desktops-on-a-single-screen feature in one of their future versions so that I can get my independent desktops back. But that’s a far-away future.
